22 Discrete-Time MC

Now we find the optimal $p$. Let $X_1,\dots,X_N$ denote the numbers, with order statistics $X_{(1)}<\dots<X_{(N)}$.

So $P(\text{success})=\sum_{k=pN+1}^{N}\frac{1}{N}\cdot\frac{pN}{k-1}\approx p\ln\frac{1}{p}=-p\ln p$, which is maximized when $p=e^{-1}$.
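As a quick numeric sanity check (the grid search below is purely illustrative), the function $p\mapsto -p\ln p$ does peak near $e^{-1}\approx 0.368$:

```python
import math

# Grid search for the maximizer of -p * ln(p) over (0, 1).
ps = [i / 100000 for i in range(1, 100000)]
best_p = max(ps, key=lambda p: -p * math.log(p))

print(best_p)  # close to 1/e ≈ 0.36788
```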

1 Basics of DTMC

Discrete-Time Markov Chain (DTMC)

$\{X_n,\ n\in\mathbb{N}_0\}$ is a stochastic process with discrete state space $S$. It is a Markov Chain if for all $n\in\mathbb{N}_0$ and $s_0,\dots,s_{n+1}\in S$, $P(X_{n+1}=s_{n+1}\mid X_0=s_0,\dots,X_n=s_n)=P(X_{n+1}=s_{n+1}\mid X_n=s_n)$.

For example, the one-dimensional simple random walk.
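A minimal simulation sketch (the helper `random_walk` and its defaults are illustrative, not from the notes): each step moves up or down by 1, and the next position depends only on the current one, which is exactly the Markov property above.

```python
import random

def random_walk(n_steps, p=0.5, seed=0):
    """Simulate a simple 1-d random walk started at 0.

    At each step the walker moves +1 with probability p and -1 otherwise;
    the update uses only the current position (Markov property).
    """
    rng = random.Random(seed)
    x, path = 0, [0]
    for _ in range(n_steps):
        x += 1 if rng.random() < p else -1
        path.append(x)
    return path

path = random_walk(10)
print(path)
```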

Claim (Chapman-Kolmogorov Equation)

$\forall n,m\in\mathbb{N}$, $p_{ij}^{(m+n)}=\sum_{k\in S}p_{ik}^{(m)}p_{kj}^{(n)}$. I.e. $(P^{n+m})_{ij}=(P^nP^m)_{ij}$.

This claim shows that the $n$-step transition probability matrix $P^{(n)}=(p_{ij}^{(n)})$ is actually the matrix power $P^n$.
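The Chapman-Kolmogorov identity can be checked numerically; the 3-state transition matrix below is a hypothetical example, not from the notes.

```python
import numpy as np

# A hypothetical 3-state transition matrix (each row sums to 1).
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])

m, n = 2, 3
lhs = np.linalg.matrix_power(P, n + m)                              # p_ij^{(n+m)}
rhs = np.linalg.matrix_power(P, n) @ np.linalg.matrix_power(P, m)   # sum_k p_ik^{(n)} p_kj^{(m)}
print(np.allclose(lhs, rhs))  # True
```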


A graphical representation of a homogeneous Markov chain: *(figure omitted)*

Some questions of interest: given $X_0=i\in\{1,\dots,L-1\}$, what is the probability that the chain reaches $L$ before $0$, and how long does this take on average?

2 Classification of States

Accessible & Communicating States

  • $i\to j$: state $j$ is accessible from $i$ if $p_{ij}^{(n)}>0$ for some $n\in\mathbb{N}$.
  • $i\leftrightarrow j$: states $i,j$ communicate if $i\to j$ and $j\to i$.

Some Important Values

  • First passage probability $f_{ij}^{(n)}=P(X_1\neq j,\dots,X_{n-1}\neq j,X_n=j\mid X_0=i)$.
  • Return probability $f_{ii}=\sum_{n=1}^{\infty}f_{ii}^{(n)}$. (the probability of eventually coming back)
  • Mean recurrence time $r_i=\sum_{n=1}^{\infty}n\,f_{ii}^{(n)}$. (the expected time to come back)

State Definitions

A state iS is called

  1. Recurrent/persistent if $f_{ii}=1$.
    1. Null recurrent if $r_i=\infty$.
    2. Positive recurrent if $r_i<\infty$.
  2. Transient if $f_{ii}<1$.

Theorem

  1. $\sum_{n=1}^{\infty}p_{jj}^{(n)}=\infty \iff j$ is recurrent. In that case, for all $i\in S$ s.t. $i\to j$, $\sum_{n=1}^{\infty}p_{ij}^{(n)}=\infty$.
    Recurrence is equivalent to visiting the state infinitely many times.
  2. $\sum_{n=1}^{\infty}p_{jj}^{(n)}<\infty \iff j$ is transient. In that case, for all $i\in S$, $\sum_{n=1}^{\infty}p_{ij}^{(n)}<\infty$ and $\lim_{n\to\infty}p_{ij}^{(n)}=0$.
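A small numeric illustration of this dichotomy (the 2-state chain is an assumed example): state 0 leaks into an absorbing state, so the partial sums of $p_{00}^{(n)}$ converge, while for the absorbing (recurrent) state 1 they grow without bound.

```python
import numpy as np

# State 0 is transient (it leaks to state 1); state 1 is absorbing, hence recurrent.
P = np.array([[0.5, 0.5],
              [0.0, 1.0]])

partial = np.zeros((2, 2))
Pn = np.eye(2)
for n in range(1, 200):
    Pn = Pn @ P
    partial += Pn          # accumulates sum_{n>=1} p_ij^{(n)}

print(partial[0, 0])  # converges to 1, so state 0 is transient
print(partial[1, 1])  # equals the number of terms, so the sum diverges
```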

3 Hitting and Recurrence

Suppose $S=\{1,\dots,m\}\cup\{m+1,\dots,n\}$, where $B=\{1,\dots,m\}$ are transient states and $A=\{m+1,\dots,n\}$ are absorbing. We can't return from $A$ to $B$ (such $A$ are defined as absorbing states, and $B$ are transient). Then the transition matrix can be written as $P=\begin{bmatrix}Q & R\\ O & S\end{bmatrix}$.

Hitting Probability

For $i\in B$, $j\in A$, $h_{ij}=P(\text{enters }A\text{ through }j\in A\mid X_0=i)$.

Claim

$h_{ij}=p_{ij}+\sum_{k\in B}p_{ik}h_{kj}$.

In matrix form, $H=(h_{ij})$ satisfies $H=R+QH$. Then if $(I-Q)^{-1}$ exists, $H=(I-Q)^{-1}R$.
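A sketch of this computation for a fair gambler's ruin chain on $\{0,\dots,4\}$ (an assumed example, with $B=\{1,2,3\}$ and $A=\{0,4\}$); the known answer for a fair game is $h_{i,4}=i/4$.

```python
import numpy as np

# Fair gambler's ruin on {0,...,4}: transient B = {1,2,3}, absorbing A = {0,4}.
Q = np.array([[0.0, 0.5, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 0.5, 0.0]])
R = np.array([[0.5, 0.0],    # columns correspond to absorbing states 0 and 4
              [0.0, 0.0],
              [0.0, 0.5]])

# H = (I - Q)^{-1} R, computed via a linear solve instead of an explicit inverse.
H = np.linalg.solve(np.eye(3) - Q, R)
print(H[:, 1])  # P(absorbed at 4 | X_0 = i) for i = 1, 2, 3
```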

Lemma

Suppose $M$ is a square matrix with $\lim_{n\to\infty}M^n=0$. Then $(I-M)^{-1}$ exists and $(I-M)^{-1}=\sum_{n=0}^{\infty}M^n$.

Back to $(I-Q)^{-1}$. Since $P^n=\begin{bmatrix}Q^n & *\\ O & S^n\end{bmatrix}$, we have $p_{ik}^{(n)}=q_{ik}^{(n)}$ for $i,k\in B$. Each $k\in B$ is transient, so $\lim_{n\to\infty}p_{ik}^{(n)}=0$, hence $\lim_{n\to\infty}Q^n=0$. So by the lemma, $(I-Q)^{-1}$ exists.

Fundamental Matrix

$(I-Q)^{-1}=\sum_{n=0}^{\infty}Q^n$ is called the fundamental matrix of the absorbing Markov Chain.
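The Neumann series from the lemma can be checked against a direct inverse; the transient block $Q$ below is a hypothetical example with $Q^n\to 0$.

```python
import numpy as np

# A hypothetical transient block Q with Q^n -> 0 (spectral radius < 1).
Q = np.array([[0.0, 0.5, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 0.5, 0.0]])

N = np.linalg.inv(np.eye(3) - Q)   # fundamental matrix (I - Q)^{-1}

# Partial sums of the series I + Q + Q^2 + ... approach N.
partial_sum, Qn = np.eye(3), np.eye(3)
for _ in range(200):
    Qn = Qn @ Q
    partial_sum += Qn

print(np.allclose(partial_sum, N))  # True
```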

Claim (Hitting Time)

Let $t_i$ be the expected number of steps it takes to hit $A$ given $X_0=i\in B$. Then $\begin{bmatrix}t_1\\ t_2\\ \vdots\\ t_m\end{bmatrix}=(I-Q)^{-1}\begin{bmatrix}1\\ 1\\ \vdots\\ 1\end{bmatrix}$.
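For a fair gambler's ruin on $\{0,\dots,4\}$ (an assumed example), the known expected absorption times are $t_i=i(4-i)$, which the formula reproduces:

```python
import numpy as np

# Transient block of a fair gambler's ruin on {0,...,4} (assumed example).
Q = np.array([[0.0, 0.5, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 0.5, 0.0]])

# t = (I - Q)^{-1} 1, computed via a linear solve.
t = np.linalg.solve(np.eye(3) - Q, np.ones(3))
print(t)  # expected steps to absorption from states 1, 2, 3
```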

4 Stationary Distribution

Stationary Distribution

The row vector $\pi=(\pi_i)_{i\in S}$ is called a stationary distribution of the Markov Chain with transition probability matrix $P$, if

  • $\pi_i\ge 0$ for all $i\in S$, and $\sum_{i\in S}\pi_i=1$.
  • $\pi P=\pi$. (i.e. $\pi$ is a left eigenvector of $P$ with eigenvalue $1$.)

If $X_0\sim\pi$, then $X_n\sim\pi$ for all $n\in\mathbb{N}$.
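A quick numeric check on a hypothetical 3-state chain (an assumed example): solve for the left eigenvector with eigenvalue 1 and verify it is preserved by one step.

```python
import numpy as np

# A hypothetical irreducible 3-state transition matrix.
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])

# Left eigenvector of P for eigenvalue 1, normalized to a probability vector.
w, V = np.linalg.eig(P.T)
pi = np.real(V[:, np.argmin(np.abs(w - 1))])
pi = pi / pi.sum()

print(np.allclose(pi @ P, pi))  # True: pi is stationary
```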

Does every Markov Chain have a stationary distribution? When it exists, is it unique?

Theorem (Perron-Frobenius)

  1. Every Markov Chain with finite $S$ has a stationary distribution $\pi$.
  2. If in addition the chain is irreducible, then $\pi$ is unique, and $\pi_i=r_i^{-1}$ for all $i\in S$, where $r_i=\sum_{n=1}^{\infty}n\,f_{ii}^{(n)}$ is the mean recurrence time for state $i\in S$.
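The identity $\pi_i=r_i^{-1}$ can be verified numerically on a hypothetical irreducible 3-state chain, computing the mean recurrence time of state 0 by first-step analysis (an independent route that does not use the theorem):

```python
import numpy as np

# A hypothetical irreducible 3-state transition matrix.
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])

# Stationary distribution via the left eigenvector for eigenvalue 1.
w, V = np.linalg.eig(P.T)
pi = np.real(V[:, np.argmin(np.abs(w - 1))])
pi = pi / pi.sum()

# Mean recurrence time of state 0 by first-step analysis:
# m_j = expected time to hit 0 from j (j != 0) solves (I - Q) m = 1,
# where Q is P with row and column 0 removed.
Q = P[1:, 1:]
m = np.linalg.solve(np.eye(2) - Q, np.ones(2))
r0 = 1 + P[0, 1:] @ m

print(r0, 1 / pi[0])  # the two values agree
```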